
    Fast Biclustering by Dual Parameterization

    We study two clustering problems: Starforest Editing, the problem of adding and deleting edges to obtain a disjoint union of stars, and its generalization Bicluster Editing. We show that, in addition to being NP-hard, neither problem can be solved in subexponential time unless the Exponential Time Hypothesis fails. Misra, Panolan, and Saurabh (MFCS 2013) argue that introducing a bound on the number of connected components in the solution should not make the problem easier: in particular, they argue that the subexponential time algorithm for editing to a fixed number of clusters (p-Cluster Editing) by Fomin et al. (J. Comput. Syst. Sci., 80(7) 2014) is an exception rather than the rule. Here, p is a secondary parameter, bounding the number of components in the solution. However, upon bounding the number of stars or bicliques in the solution, we obtain algorithms which run in time O(2^{3*sqrt(pk)} + n + m) for p-Starforest Editing and O(2^{O(p * sqrt(k) * log(pk))} + n + m) for p-Bicluster Editing. We obtain a similar result for the more general case of t-Partite p-Cluster Editing. This is subexponential in k for a fixed number of clusters, since p is then considered a constant. Our results even out the number of multivariate subexponential time algorithms and give reasons to believe that this area warrants further study.
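    To make the target structure of Starforest Editing concrete: the edited graph must be a disjoint union of stars, i.e. every connected component is a tree with at most one vertex of degree two or more. The Python sketch below (written for this listing, not taken from the paper; the function name and graph representation are illustrative) checks exactly that property, which is what a solution of cost at most k must achieve after at most k edge additions and deletions.

```python
from collections import defaultdict, deque

def is_star_forest(n, edges):
    """Check whether the graph on vertices 0..n-1 with the given edge list
    is a disjoint union of stars (the target of Starforest Editing)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    seen = [False] * n
    for s in range(n):
        if seen[s]:
            continue
        # Collect the connected component containing s by BFS.
        comp, queue = [], deque([s])
        seen[s] = True
        while queue:
            u = queue.popleft()
            comp.append(u)
            for w in adj[u]:
                if not seen[w]:
                    seen[w] = True
                    queue.append(w)
        # A component is a star iff it is a tree (|E| = |V| - 1)
        # with at most one vertex of degree >= 2.
        comp_edges = sum(len(adj[u]) for u in comp) // 2
        centers = sum(1 for u in comp if len(adj[u]) >= 2)
        if comp_edges != len(comp) - 1 or centers > 1:
            return False
    return True

# A path on three vertices is the star K_{1,2}; a path on four vertices is not.
assert is_star_forest(3, [(0, 1), (1, 2)])
assert not is_star_forest(4, [(0, 1), (1, 2), (2, 3)])
```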

    A Faster Parameterized Algorithm for Treedepth

    The width measure \emph{treedepth}, also known as vertex ranking, centered coloring and elimination tree height, is a well-established notion which has recently seen a resurgence of interest. We present an algorithm which---given as input an $n$-vertex graph, a tree decomposition of the graph of width $w$, and an integer $t$---decides Treedepth, i.e. whether the treedepth of the graph is at most $t$, in time $2^{O(wt)} \cdot n$. If necessary, a witness structure for the treedepth can be constructed in the same running time. In conjunction with previous results we provide a simple algorithm and a fast algorithm which decide treedepth in time $2^{2^{O(t)}} \cdot n$ and $2^{O(t^2)} \cdot n$, respectively, which do not require a tree decomposition as part of their input. The former answers an open question posed by Ossona de Mendez and Nesetril as to whether deciding Treedepth admits an algorithm with a linear running time (for every fixed $t$) that does not rely on Courcelle's Theorem or other heavy machinery. For chordal graphs we can prove a running time of $2^{O(t \log t)} \cdot n$ for the same algorithm. Comment: An extended abstract was published in ICALP 2014, Track
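    For readers unfamiliar with the parameter, treedepth has a simple recursive definition: the treedepth of an empty graph is 0, of a disconnected graph it is the maximum over its connected components, and of a connected graph on at least two vertices it is 1 plus the minimum, over all vertices v, of the treedepth of the graph with v removed. The brute-force Python sketch below evaluates that recursion directly; it runs in exponential time and only illustrates the quantity the paper's algorithm decides (all names are illustrative, this is not the paper's algorithm).

```python
from functools import lru_cache

def treedepth(n, edge_list):
    """Brute-force treedepth of the graph on vertices 0..n-1,
    computed straight from the recursive definition."""
    adj = [set() for _ in range(n)]
    for u, v in edge_list:
        adj[u].add(v)
        adj[v].add(u)

    def components(vs):
        # Split the vertex set vs into connected components (as frozensets).
        vs, comps = set(vs), []
        while vs:
            start = vs.pop()
            comp, stack = {start}, [start]
            while stack:
                u = stack.pop()
                for w in adj[u]:
                    if w in vs:
                        vs.remove(w)
                        comp.add(w)
                        stack.append(w)
            comps.append(frozenset(comp))
        return comps

    @lru_cache(maxsize=None)
    def td(vs):
        if not vs:
            return 0
        comps = components(vs)
        if len(comps) > 1:
            return max(td(c) for c in comps)
        if len(vs) == 1:
            return 1
        # Connected graph: delete each vertex in turn and recurse.
        return 1 + min(td(vs - {v}) for v in vs)

    return td(frozenset(range(n)))

# The path P_4 has treedepth 3; the star K_{1,3} has treedepth 2.
assert treedepth(4, [(0, 1), (1, 2), (2, 3)]) == 3
assert treedepth(4, [(0, 1), (0, 2), (0, 3)]) == 2
```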

    Fast Biclustering by Dual Parameterization

    We study two clustering problems: Starforest Editing, the problem of adding and deleting edges to obtain a disjoint union of stars, and its generalization Bicluster Editing. We show that, in addition to being NP-hard, neither problem can be solved in subexponential time unless the Exponential Time Hypothesis fails. Misra, Panolan, and Saurabh (MFCS 2013) argue that introducing a bound on the number of connected components in the solution should not make the problem easier: in particular, they argue that the subexponential time algorithm for editing to a fixed number of clusters (p-Cluster Editing) by Fomin et al. (J. Comput. Syst. Sci., 80(7) 2014) is an exception rather than the rule. Here, p is a secondary parameter, bounding the number of components in the solution. However, upon bounding the number of stars or bicliques in the solution, we obtain algorithms which run in time $2^{5 \sqrt{pk}} + O(n+m)$ for p-Starforest Editing and $2^{O(p \sqrt{k} \log(pk))} + O(n+m)$ for p-Bicluster Editing. We obtain a similar result for the more general case of t-Partite p-Cluster Editing. This is subexponential in k for a fixed number of clusters, since p is then considered a constant. Our results even out the number of multivariate subexponential time algorithms and give reasons to believe that this area warrants further study. Comment: Accepted for presentation at IPEC 201
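    Analogously to the star forest check shown earlier, the target of Bicluster Editing is a bicluster graph: a disjoint union of bicliques (complete bipartite graphs). The Python sketch below is illustrative only; the function name and the convention that isolated vertices count as trivial bicliques are ours, not the paper's.

```python
from collections import defaultdict, deque

def is_bicluster_graph(n, edges):
    """Check whether the graph on vertices 0..n-1 is a disjoint union of
    bicliques, the target structure of Bicluster Editing."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    color = [None] * n
    for s in range(n):
        if color[s] is not None:
            continue
        # 2-colour the component by BFS; a biclique must be bipartite.
        color[s] = 0
        sides, comp_edges, queue = [{s}, set()], 0, deque([s])
        while queue:
            u = queue.popleft()
            comp_edges += len(adj[u])
            for w in adj[u]:
                if color[w] is None:
                    color[w] = 1 - color[u]
                    sides[color[w]].add(w)
                    queue.append(w)
                elif color[w] == color[u]:
                    return False  # odd cycle: not even bipartite
        comp_edges //= 2
        # Complete bipartite: every cross pair between the sides must be an edge.
        if comp_edges != len(sides[0]) * len(sides[1]):
            return False
    return True

# Two disjoint bicliques (an edge plus a K_{2,2}) pass; a path on four vertices fails.
assert is_bicluster_graph(6, [(0, 1), (2, 4), (2, 5), (3, 4), (3, 5)])
assert not is_bicluster_graph(4, [(0, 1), (1, 2), (2, 3)])
```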

    Lower Bounds on the Complexity of MSO_1 Model-Checking

    One of the most important algorithmic meta-theorems is a famous result by Courcelle, which states that any graph problem definable in monadic second-order logic with edge-set quantifications (MSO2) is decidable in linear time on any class of graphs of bounded tree-width. In the parlance of parameterized complexity, this means that MSO2 model-checking is fixed-parameter tractable with respect to the tree-width as parameter. Recently, Kreutzer and Tazari proved a corresponding complexity lower-bound---that MSO2 model-checking is not even in XP w.r.t. the formula size as parameter for graph classes that are subgraph-closed and whose tree-width is poly-logarithmically unbounded. Of course, this is not an unconditional result but holds modulo a certain complexity-theoretic assumption, namely, the Exponential Time Hypothesis (ETH). In this paper we present a closely related result. We show that even MSO1 model-checking with a fixed set of vertex labels, but without edge-set quantifications, is not in XP w.r.t. the formula size as parameter for graph classes which are subgraph-closed and whose tree-width is poly-logarithmically unbounded unless the non-uniform ETH fails. In comparison to Kreutzer and Tazari, (1) we use a stronger prerequisite, namely non-uniform instead of uniform ETH, to avoid the effectiveness assumption and the construction of certain obstructions used in their proofs; and (2) we assume a different set of problems to be efficiently decidable, namely MSO1-definable properties on vertex labeled graphs instead of MSO2-definable properties on unlabeled graphs. Our result has an interesting consequence in the realm of digraph width measures: Strengthening a recent result, we show that no subdigraph-monotone measure can be algorithmically useful, unless it is within a poly-logarithmic factor of (undirected) tree-width.
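    As a concrete illustration of the logic involved (an example chosen for this listing, not taken from the paper): MSO1 allows quantification over vertices and vertex sets but not over edge sets, and already this fragment expresses NP-hard properties such as 3-colourability.

```latex
% 3-colourability as an MSO1 sentence: only vertex-set variables X_1, X_2, X_3
% are quantified, never edge sets; adj denotes the edge relation of the graph.
\[
\exists X_1\, \exists X_2\, \exists X_3\;
\Big(
  \forall v\, \big(v \in X_1 \lor v \in X_2 \lor v \in X_3\big)
  \;\land\;
  \bigwedge_{i=1}^{3} \forall u\, \forall v\,
  \big( u \in X_i \land v \in X_i \rightarrow \lnot\, \mathrm{adj}(u,v) \big)
\Big)
\]
```

    MSO2 additionally permits quantification over edge sets, which is the fragment Courcelle's linear-time theorem on graphs of bounded tree-width covers.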

    Preferential potentiation of the effects of serotonin uptake inhibitors by 5-HT1A receptor antagonists in the dorsal raphe pathway: role of somatodendritic autoreceptors

    5-HT1A autoreceptor antagonists enhance the effects of antidepressants by preventing a negative feedback of serotonin (5-HT) at the somatodendritic level. The maximal elevations of extracellular concentration of 5-HT (5-HT(ext)) induced by the 5-HT uptake inhibitor paroxetine in forebrain were potentiated by the 5-HT1A antagonist WAY-100635 (1 mg/kg s.c.) in a regionally dependent manner (striatum > frontal cortex > dorsal hippocampus). Paroxetine (3 mg/kg s.c.) decreased forebrain 5-HT(ext) during local blockade of uptake. This reduction was greater in striatum and frontal cortex than in dorsal hippocampus and was counteracted by the local and systemic administration of WAY-100635. The perfusion of 50 micromol/L citalopram in the dorsal or median raphe nucleus reduced 5-HT(ext) in frontal cortex or dorsal hippocampus to 40% and 65% of baseline, respectively. The reduction of cortical 5-HT(ext) induced by perfusion of citalopram in midbrain raphe was fully reversed by WAY-100635 (1 mg/kg s.c.). Together, these data suggest that dorsal raphe neurons projecting to striatum and frontal cortex are more sensitive to self-inhibition mediated by 5-HT1A autoreceptors than median raphe neurons projecting to the hippocampus. Therefore, potentiation by 5-HT1A antagonists occurs preferentially in forebrain areas innervated by serotonergic neurons of the dorsal raphe nucleus. Peer reviewed

    Lower Bounds on the Complexity of MSO1 Model-Checking

    One of the most important algorithmic meta-theorems is a famous result by Courcelle, which states that any graph problem definable in monadic second-order logic with edge-set quantifications (i.e., MSO2 model-checking) is decidable in linear time on any class of graphs of bounded tree-width. Recently, Kreutzer and Tazari proved a corresponding complexity lower-bound - that MSO2 model-checking is not even in XP wrt. the formula size as parameter for graph classes that are subgraph-closed and whose tree-width is poly-logarithmically unbounded. Of course, this is not an unconditional result but holds modulo a certain complexity-theoretic assumption, namely, the Exponential Time Hypothesis (ETH). In this paper we present a closely related result. We show that even MSO1 model-checking with a fixed set of vertex labels, but without edge-set quantifications, is not in XP wrt. the formula size as parameter for graph classes which are subgraph-closed and whose tree-width is poly-logarithmically unbounded unless the non-uniform ETH fails. In comparison to Kreutzer and Tazari: (1) we use a stronger prerequisite, namely non-uniform instead of uniform ETH, to avoid the effectiveness assumption and the construction of certain obstructions used in their proofs; and (2) we assume a different set of problems to be efficiently decidable, namely MSO1-definable properties on vertex labeled graphs instead of MSO2-definable properties on unlabeled graphs. Our result has an interesting consequence in the realm of digraph width measures: Strengthening a recent result, we show that no subdigraph-monotone measure can be "algorithmically useful", unless it is within a poly-logarithmic factor of undirected tree-width.
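    For reference (a standard definition, not specific to either paper above): a parameterized problem is in the class XP if it can be decided in time polynomial in the input size for every fixed value of the parameter.

```latex
% Uniform definition of XP: instances (x, k) are decidable in time |x|^{f(k)}
% for some computable function f depending on the parameter k alone.
\[
L \in \mathrm{XP}
\iff
\exists\, f \colon \mathbb{N} \to \mathbb{N} \ \text{computable such that }
(x,k) \in L \text{ is decidable in time } O\!\left(|x|^{f(k)}\right).
\]
```

    Here the parameter is the size of the MSO1 formula, so the lower bound rules out even running times of the form $n^{f(|\varphi|)}$ on the graph classes considered, assuming the non-uniform ETH.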

    On the Directed Full Degree Spanning Tree Problem

    We study the parameterized complexity of a directed analog of the Full Degree Spanning Tree problem where, given a digraph D and a nonnegative integer k, the goal is to construct a spanning out-tree T of D such that at least k vertices in T have the same out-degree as in D. We show that this problem is W[1]-hard even on the class of directed acyclic graphs. In the dual version, called Reduced Degree Spanning Tree, one is required to construct a spanning out-tree T such that at most k vertices in T have out-degrees that are different from those in D. We show that this problem is fixed-parameter tractable and that it admits a problem kernel with at most 8k vertices on strongly connected digraphs and O(k^2) vertices on general digraphs. We also give an algorithm for this problem on general digraphs with running time O(5.942^k · n^{O(1)}), where n is the number of vertices in the input digraph.
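    A small Python sketch of the objective being measured (illustrative; the function name and graph encoding are ours, and T is assumed to already be a spanning out-tree of D): given the arc sets of D and of a candidate tree T, count how many vertices retain their full out-degree, the quantity the problem asks to be at least k.

```python
from collections import Counter

def full_degree_count(digraph_arcs, tree_arcs):
    """Count vertices whose out-degree in the candidate out-tree T equals
    their out-degree in the digraph D. Vertices are read off the arc set
    of D for brevity."""
    out_d = Counter(u for u, _ in digraph_arcs)
    out_t = Counter(u for u, _ in tree_arcs)
    vertices = {v for arc in digraph_arcs for v in arc}
    return sum(1 for v in vertices if out_d[v] == out_t[v])

# Toy example: D = {0->1, 0->2, 1->2}, T = {0->1, 0->2}.
# Vertex 0 keeps out-degree 2 and vertex 2 keeps out-degree 0, while
# vertex 1 drops from 1 to 0, so two vertices have full degree.
arcs_D = [(0, 1), (0, 2), (1, 2)]
arcs_T = [(0, 1), (0, 2)]
assert full_degree_count(arcs_D, arcs_T) == 2
```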